Originally published September 5, 2025 | Research Note adapted from remarks delivered at the 2025 Teaching Excellence Awards Ceremony by Debora VanNijnatten, academic director, Teaching Excellence and Innovation.
Concerns about the rising incidence of academic dishonesty permeated much of Teaching Excellence and Innovation’s programming and engagement with instructors this past year. We heard concerns from all faculties, from new and seasoned instructors teaching across levels, and with respect to all aspects of course assessment and teaching activities.
At the root of these concerns is a perception that things are changing quickly on the academic integrity front, leaving faculty and staff feeling like they are standing on shifting sands. The approaches and tools we have applied in the past no longer seem fit for purpose.
In this month’s Research Note, I highlight what seem to be the most salient aspects of the academic integrity challenge and provide some reflections on how we might respond here at Laurier.
The first challenge is being able to recognize an academic integrity violation when we see it. This has never been a completely straightforward undertaking, but there is no doubt it has now become more complex.
Academic integrity efforts at the undergraduate level in the postsecondary system have traditionally focused on ensuring that students understand what plagiarism is and how to avoid it (and cite properly); that they prepare for and rely on their own knowledge for tests and exams (and avoid cheating); and that they complete assessments on their own (and not in collusion with others) (ENQA 2024, p.8). While we couldn’t always catch students when they engaged in plagiarism, cheating or collusion, we did know, more or less, what we were looking for.
Why does it matter that we recognize and catch it? Because the entire notion of academic integrity is based on our belief in “the honesty and originality of academic assessment” (Davis 2023). Davis calls this “the academic honour code,” and she says it has been “interwoven into the fabric of institutions of learning over centuries and is foundational to educational values.” Key to this code is the expectation that the student will produce original work in their program of study, and that they will take pride in this. The code works because all stakeholders – students, their parents and potential employers – want the degree to have value (QAA 2022).
But new technologies have muddied this landscape considerably, and in an era of Generative AI (GenAI) we may need to reconsider how we think about and safeguard academic integrity (see Alarie & Cockfield, 2021). ChatGPT – now widely used by university students – doesn’t simply reproduce passages of text published on the Internet (in which case it could be detected by software such as Turnitin and flagged as plagiarism). Instead, ChatGPT constructs wholly new text: it breaks language into small units (tokens), represents them as numerical vectors, and draws on the patterns learned across vast amounts of text to combine and recombine these into new arguments (Ramlochan 2023). These outputs are difficult to distinguish as human- vs. machine-generated, and in this context plagiarism (passing off AI-generated text as one’s own) is difficult to identify.
I can’t tell you how many corridor conversations I have had over the past year about how to identify the unauthorized use of GenAI in an assignment or on an exam: there are the hallucinated citations, the examples that bear no relation to the actual course content, the overly complex writing, the overuse of bullets, and a writing tone that is supremely confident despite the wealth of inaccurate data or quotes in the text (East Central College 2025). But such use can be hard to detect and very difficult to prove, as there are no reliable software tools (see, for example, Ramlochan 2023). Instructors can spend hours in close reading, often across multiple papers, to confirm their suspicions.
This is hard work and frankly impractical in large-lecture courses with hundreds of students and teams of instructional assistants (IAs) or graduate teaching assistants (GTAs), who are themselves most likely not trained in how to detect AI use.
The second challenge is understanding why this is happening.
It also used to be clearer to me why students might be taking shortcuts. I have talked with hundreds of students over the past 25 years while conducting academic misconduct investigations in first-year and senior undergraduate courses. Students were often in distress and out of time (so poor decision-making or time management played a role), while others were under enormous pressure to get high grades (i.e., when competing for co-op placements, law school, or another academic or career milestone).
The research tells us that a series of individual factors play into student academic decision-making, including having opportunity (where there is a perception of no reprisal), incentive, pressure or need (Holden, Norris & Kuhlmeier, 2021), as well as rationalization that such behaviour is okay (particularly where there is ambiguity about what constitutes academic dishonesty). A further influential contextual factor is “the extent to which students perceive that their peers cheat,” as noted by McCabe et al. (2012).
In the current context, four additional factors exacerbate these dynamics:
The most significant, of course, has to be the rapid rise of new technologies and the ways in which online educational platforms, tools and information processing have made it possible to, frankly, outsource most of the work that students are supposed to be doing on their own. And survey research confirms widespread use of GenAI; while the numbers vary across surveys, anywhere from 59% (KPMG 2024) to more than 80% (Freeman 2025) of students report using AI in their classes. If GenAI tools are so readily available and can perform the full range of assessment-related tasks; if there is a perception that everyone else is using them; and if there is ambiguity as to the boundaries of what is academically honest (as there is right now), then the incentives are working in the direction of misuse.
The second factor is the preparation that students get in the secondary education system. We know that GenAI came into widespread use without policies being in place, and it has been difficult for teachers to walk this back and police it. One commentator has noted recently that students (particularly during and after COVID) were “taught to prioritise conformity over creativity,” with the result that this “[extinguishes] the desire to produce work which is original and individualistic in later years” (Draper 2022, p.2). Still others point to the lack of rigour and structure in the public system for completing coursework, and the inability of students to discipline themselves (Huish 2022). When they are confronted with the workloads and structure of university, students may feel underprepared and highly stressed, and may be more likely to take shortcuts.
The third is “the assault of online predatory ‘cheating’ companies on higher education systems” which provide easily accessible, convenient and bespoke options for students to get around assignment requirements (ENQA 2024, p.6). Never has it been so easy to have someone (or something) do your work for you, as long as you can pay.
And the fourth is that student perspectives on their own education have changed. Students currently display a noticeably instrumental attitude towards the role that education plays in their lives – i.e., to get a job and gain upward mobility – as successive years of survey data from the Canadian University Survey Consortium show. I think this represents the deepest fear for faculty: that students may not understand the need for integrity in their education, in which case the academic honour code, premised as it is on original work being at the core of the value of a university degree, stands on very shaky ground indeed.
We need an approach to safeguarding academic integrity at Laurier that is developmental and comprehensive, and that brings together good things happening all across Laurier.
A developmental approach emphasizes understanding and supporting student growth and change, recognizing that students are coming to us with a certain level of preparedness, a certain culture, and learned behaviours (Walker 2008), and we need to help them adjust, learn and internalize the academic honour code over the course of their time at Laurier. It also means that we need to move away from negative framing of academic integrity, and towards helping students see it as a skill (tied to values to be acquired and honed at university, as per Davis 2023).
This is essentially about meeting students where they are at – and here we can support and build on all the work that is already being undertaken in this area by our colleagues in Student Affairs (especially Student Success), in Faculty offices across the institution, and in the Provost’s Office.
On the teaching side, we need to think about what more we can do in the classroom to support students in understanding what lies at the root of the academic honour code, why it is important and what it means for their conduct in courses. How do we have those conversations with students, what exercises can we undertake in our courses, and what assessments can drive home academic integrity messages? And what does this look like amid the challenges posed by generative AI? The two cannot be untangled.
But we also need to think comprehensively. How do we bring together all the good work being undertaken across campus, harness the learning, and build a community around active engagement and dialogue with the conceptual underpinnings of academic integrity and its operational application in this new era of GenAI?
As one pillar of this conversation, TEI is launching a Faculty Learning Community on AIx2 (generative AI and academic integrity), to create a place where instructors can come together, share insights and experiences, capture innovation and actively think about how to link up these efforts.
Alarie, Benjamin, Arthur Cockfield, and GPT-3 (2021), “Will Machines Replace Us? Machine-Authored Texts and the Future of Scholarship,” Law, Technology and Humans 3 (2): 5-11.
Davis, Annemarie (2023), “Academic integrity in the time of contradictions,” Cogent Education, 10, 2, 2289307, DOI: 10.1080/2331186X.2023.2289307
Draper, Michael (2022), “Revisiting academic integrity from a student perspective,” Quality Compass, November.
East Central College (2025), “Detecting AI-Generated Text: Things to Watch For” AI Essentials for ECC Faculty.
European Association for Quality Assurance in Higher Education (2024), “Supporting Cultures of Academic Integrity: The role of quality assurance agencies in promoting and enhancing academic integrity and ensuring learning,” ENQA Occasional Paper.
Freeman, Josh, with a Foreword by Professor Janice Kay (2025), Student Generative AI Survey 2025. HEPI and Kortext.
Holden, Olivia L., Meghan E. Norris and Valerie A. Kuhlmeier (2021), “Academic Integrity in Online Assessment: A Research Review” Frontiers in Education, Volume 6.
Huish, R., as quoted in J. Wong (2022), “Pandemic learning left students feeling behind. Post-secondary transition courses aim to get them on track,” CBC News, November 12.
KPMG (2024), KPMG Generative AI Adoption Index Survey.
McCabe, D. L., Butterfield, K. D., & Trevino, L. K. (2012). Cheating in college: Why students do it and what educators can do about it. Baltimore, MD: Johns Hopkins University Press.
Quality Assurance Agency of Higher Education (2022), “Revisiting academic integrity from a student perspective” Quality Compass, November.
Ramlochan, Sunil (2023), “The Truth About AI Detectors - More Harm Than Good,” Prompt Engineering and AI Institute, November 6.
Walker, M. (2008), “Working with college students & student development theory primer,” Retrieved October 11, 2018.